1,216 research outputs found

    Remote sensing data from CLARET: A prototype CART data set

    The data set containing radiation, meteorological, and cloud sensor observations is documented. It was prepared for use by the Department of Energy's Atmospheric Radiation Measurement (ARM) Program and other interested scientists. These data are a precursor of the types of data that ARM Cloud And Radiation Testbed (CART) sites will provide. The data are from the Cloud Lidar And Radar Exploratory Test (CLARET), conducted by the Wave Propagation Laboratory during autumn 1989 in the Denver-Boulder area of Colorado, primarily to develop new cloud-sensing techniques for cirrus. After becoming aware of the experiment, ARM scientists requested archival of subsets of the data to assist the developing ARM program. Five CLARET cases were selected: two with cirrus, one with stratus, one with mixed-phase clouds, and one with clear skies. Satellite data from the stratus case and one cirrus case were analyzed for statistics on cloud cover and top height. The main body of the selected data is available on diskette from the Wave Propagation Laboratory or Los Alamos National Laboratory.

    The Grizzly, November 30, 1984

    Board Approves $825 Increase in Annual Tuition, Room and Board • Glick Summons Students: Wood Responds to Roving Reporter • Shorts: Messiah; Spring Jobs; Ice Fishing • Hoop Team Upsets West Chester in Opener • Booters Finish Great Season • Mers and Vers Attend Bloomsburg Invitational • Whatley Plans to Expand Intramurals • Co-ed Volleyball Ends • Fencing in First Competition • Scenes From the Soccer Season

    The Grizzly, February 1, 1985

    Ursinus Dodges Enrollment Drought • USGA Candidate Petitions Due • Letter: Spanish Prof Encounters Alumnus • New Fogerty Album a Hit • The Beat Goes Public • pro Theatre • Swimmin' Women Boost Record to 3-2 • Grapplers Win 6, Lose 1 • Injured Matman Won't Quit • Mers and Vers Compete During Break • Men's B-ball Defeat Moravian • Lady Bears Trying to Pull Out of Slump • Chinese Star Talks to Booters

    Data Models for Dataset Drift Controls in Machine Learning With Images

    Camera images are ubiquitous in machine learning research. They also play a central role in the delivery of important services spanning medicine and environmental surveying. However, the application of machine learning models in these domains has been limited because of robustness concerns. A primary failure mode is a drop in performance due to differences between the training and deployment data. While there are methods to prospectively validate the robustness of machine learning models to such dataset drifts, existing approaches do not account for explicit models of the primary object of interest: the data. This makes it difficult to create physically faithful drift test cases or to provide specifications of data models that should be avoided when deploying a machine learning model. In this study, we demonstrate how these shortcomings can be overcome by pairing machine learning robustness validation with physical optics. We examine the role raw sensor data and differentiable data models can play in controlling performance risks related to image dataset drift. The findings are distilled into three applications. First, drift synthesis enables the controlled generation of physically faithful drift test cases. The experiments presented here show that the average decrease in model performance is four to ten times less severe than under post-hoc augmentation testing. Second, the gradient connection between task and data models allows for drift forensics that can be used to specify performance-sensitive data models which should be avoided during deployment of a machine learning model. Third, drift adjustment opens up the possibility of processing adjustments in the face of drift. This can speed up and stabilize classifier training, with gains of up to 20% in validation accuracy. A guide to access the open code and datasets is available at https://github.com/aiaudit-org/raw2logit. (Comment: LO and MA contributed equally.)
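    A rough sketch of the drift synthesis idea, assuming PyTorch; the class and function names below are illustrative and are not the raw2logit API. A toy differentiable data model (black level, gain, gamma) renders the same raw sensor measurements under several processing configurations, and the task model is evaluated on each rendering to produce physically parameterized drift test cases:

```python
import torch
import torch.nn as nn

class SimpleDataModel(nn.Module):
    """Toy differentiable data model: black-level subtraction, gain,
    and gamma compression applied to raw sensor intensities in [0, 1]."""
    def __init__(self, black_level=0.02, gain=1.0, gamma=2.2):
        super().__init__()
        self.black_level = nn.Parameter(torch.tensor(float(black_level)))
        self.gain = nn.Parameter(torch.tensor(float(gain)))
        self.gamma = nn.Parameter(torch.tensor(float(gamma)))

    def forward(self, raw):
        x = torch.clamp(raw - self.black_level, min=0.0)
        x = torch.clamp(x * self.gain, 0.0, 1.0)
        return x ** (1.0 / self.gamma)

def synthesize_drift_cases(raw_batch, task_model, gains=(0.5, 1.0, 2.0)):
    """Render the same raw measurements through several data-model
    configurations and collect the task model's predictions under each."""
    predictions = {}
    task_model.eval()
    with torch.no_grad():
        for g in gains:
            images = SimpleDataModel(gain=g)(raw_batch)
            predictions[g] = task_model(images).argmax(dim=1)
    return predictions
```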

    The Grizzly, November 9, 1984

    Program Board Attends Activities Conference • Fraternity Sponsors Thanksgiving Food Drive • Smith Addresses Value of Liberal Arts • Editorials: Newspaper Reflects Campus News and Views; Pulling the Big Lever • Letters to the Editor: Mock Election Coverage Questioned; Intervention Policy Must Go • College Bowl Season Underway • Ursinus in a Box • Shorts: Ec. Council Programs; Photo Exhibit in Library; Turkey Trot; UPB Trips Scheduled; Dutch Folk Songs; USGA Report • Bear Booters Win ECA Conference Bid • Runners Second in MAC's • A Student's View of The Training Facility • Mers Shine at Relay Carnival • Middle East Forum Scheduled • Theater Review: A Thurber Carnival • Student Teachers Hard at Work • News of Yesteryear: Judy Collins to Appear in Concert at Ursinus

    Geodetic measurements reveal similarities between post–Last Glacial Maximum and present-day mass loss from the Greenland ice sheet

    Accurate quantification of the millennial-scale mass balance of the Greenland ice sheet (GrIS) and its contribution to global sea-level rise remain challenging because of sparse in situ observations in key regions. Glacial isostatic adjustment (GIA) is the ongoing response of the solid Earth to ice and ocean load changes occurring since the Last Glacial Maximum (LGM; ~21 thousand years ago) and may be used to constrain the GrIS deglaciation history. We use data from the Greenland Global Positioning System network to directly measure GIA and estimate basin-wide mass changes since the LGM. Unexpectedly large GIA uplift rates of +12 mm/year are found in southeast Greenland. These rates are due to low upper-mantle viscosity in the region, a legacy of Greenland's passage over the Iceland hot spot about 40 million years ago. This region of concentrated soft rheology has a profound influence on reconstructing the deglaciation history of Greenland. We reevaluate the evolution of the GrIS since the LGM and obtain a loss of 1.5 m of sea-level equivalent from the northwest and southeast. These same sectors dominate modern mass loss. We suggest that the present destabilization of these marine-based sectors may increase sea level for centuries to come. Our new deglaciation history and GIA uplift estimates suggest that studies that use the Gravity Recovery and Climate Experiment satellite mission to infer present-day changes in the GrIS may have erroneously corrected for GIA and underestimated the mass loss by about 20 gigatons/year.
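    As a back-of-the-envelope illustration of the last point (an assumed sign convention, not the paper's own formulation): satellite gravimetry senses the sum of the ice-mass trend and the solid-Earth GIA signal, so the inferred ice-mass trend is the observed trend minus the modeled GIA contribution, and a GIA model that understates the true signal understates the ice loss by the same amount.

```latex
% Schematic GIA correction of a satellite-gravimetry mass trend
% (illustrative sign convention; the ~20 Gt/yr figure is taken from the abstract above)
\[
  \dot{M}_{\mathrm{ice}} = \dot{M}_{\mathrm{observed}} - \dot{M}_{\mathrm{GIA}},
  \qquad
  \Delta\dot{M}_{\mathrm{ice}} = -\,\Delta\dot{M}_{\mathrm{GIA}} \approx -20~\mathrm{Gt/yr}.
\]
```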

    Data models for dataset drift controls in machine learning with optical images

    Camera images are ubiquitous in machine learning research. They also play a central role in the delivery of important public services spanning medicine and environmental surveying. However, the application of machine learning models in these domains has been limited because of robustness concerns. A primary failure mode is a drop in performance due to differences between the training and deployment data. While there are methods to prospectively validate the robustness of machine learning models to such dataset drifts, existing approaches do not account for explicit models of machine learning's primary object of interest: the data. This limits our ability to study and understand the relationship between data generation and downstream machine learning model performance in a physically accurate manner. In this study, we demonstrate how to overcome this limitation by pairing traditional machine learning with physical optics to obtain explicit and differentiable data models. We demonstrate how such data models can be constructed for image data and used to control downstream machine learning model performance related to dataset drift. The findings are distilled into three applications. First, drift synthesis enables the controlled generation of physically faithful drift test cases to power model selection and targeted generalization. Second, the gradient connection between the machine learning task model and the data model allows advanced, precise tolerancing of task-model sensitivity to changes in the data generation. These drift forensics can be used to precisely specify the acceptable data environments in which a task model may be run. Third, drift optimization opens up the possibility of creating drifts that help the task model learn better and faster, effectively optimizing the data-generating process itself to support the downstream machine vision task. This is an interesting upgrade to existing imaging pipelines, which traditionally have been optimized for consumption by human observers rather than by machine learning models. The data models require access to raw sensor images, as commonly processed at scale in industry domains such as microscopy, biomedicine, autonomous vehicles, or remote sensing. Alongside the data-model code, we release two datasets to the public that we collected as part of this work. In total, the two datasets, Raw-Microscopy and Raw-Drone, comprise 1,488 scientifically calibrated reference raw sensor measurements, 8,928 raw intensity variations, and 17,856 images processed through twelve data models with different configurations. A guide to access the open code and datasets is available at https://github.com/aiaudit-org/raw2logit.
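    A hedged sketch of the drift forensics step described above, again assuming PyTorch and reusing the toy data model sketched earlier; the function name and signature are illustrative, not the raw2logit interface. Because the data model is differentiable, the task loss can be backpropagated into the processing parameters to identify data-generation configurations the task model is especially sensitive to:

```python
import torch
import torch.nn.functional as F

def drift_forensics(raw_batch, labels, data_model, task_model):
    """Gradient of the task loss with respect to the data-model parameters.

    `data_model` and `task_model` are torch.nn.Module instances, e.g. the
    toy SimpleDataModel from the earlier sketch and an image classifier.
    Large gradient magnitudes flag processing parameters (gain, gamma, ...)
    to which the task model's performance is most sensitive."""
    for p in data_model.parameters():
        p.requires_grad_(True)
    images = data_model(raw_batch)                  # differentiable processing
    loss = F.cross_entropy(task_model(images), labels)
    grads = torch.autograd.grad(loss, list(data_model.parameters()))
    return {name: g for (name, _), g in zip(data_model.named_parameters(), grads)}
```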